

Section: New Results

Visual Perception and Audio Rendering

Perception of Visual Artifacts in Image-Based Rendering of Façades

Participants : Peter Vangorp, Gaurav Chaurasia, Pierre-Yves Laffont, George Drettakis.

Figure 11. (a) One of the environments used in our perceptual tests, with the input cameras shown. Examples of two of the artifacts we studied, namely (b) parallax distortion and (c) ghosting.

Image-based rendering (IBR) techniques allow users to create interactive 3D visualizations of scenes from a few snapshots (Figure 11(a)). However, despite substantial progress in the field, the main barrier to better quality and more efficient IBR visualizations is a set of common, visually objectionable artifacts. These occur when scene geometry is approximate or viewpoints differ from the original shots, leading to parallax distortions (Figure 11(b)), blurring, ghosting (Figure 11(c)), and popping errors that detract from the appearance of the scene. We argue that a better understanding of the causes and perceptual impact of these artifacts is the key to improving IBR methods.

We present a series of psychophysical experiments in which we systematically map out the perception of artifacts in IBR visualizations of façades as a function of the most common causes. We separate artifacts into different classes and measure how they impact visual appearance as a function of the number of images available, the geometry of the scene and the viewpoint. The results reveal a number of counter-intuitive effects in the perception of artifacts.

We summarize our results in terms of the following practical guidelines for improving existing and future IBR techniques:

  • When the total number of available images is small, e.g., because of storage limitations, it is preferable to use a sudden transition with its associated popping artifact rather than a gradual blending transition with its associated ghosting artifact.

  • Interestingly, the depth range of the façade does not affect the perceived parallax distortions, even though it clearly does affect the objective parallax distortions. Only the intended output viewing angle should be taken into account when capturing images.

  • For Google Street View™-like visualizations, a shorter cross-fading transition would improve the perceived quality; the sketch after this list illustrates the two transition strategies.
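To make the first and last guidelines concrete, the following minimal Python sketch (an illustration with hypothetical names, not our actual rendering code) computes the blending weights of the two nearest input views for a sudden "popping" transition and for a cross-fade of adjustable duration:

    def blend_weights(t, mode="cross_fade", fade_width=0.2):
        """Weights (w_left, w_right) of the two nearest input views.

        t          -- virtual camera position between the views, in [0, 1]
        mode       -- "popping": hard switch at the midpoint;
                      "cross_fade": linear blend inside a window around the
                      midpoint (a narrower window means a shorter fade and
                      less time during which ghosting is visible)
        fade_width -- fraction of the path over which the cross-fade occurs
        """
        if mode == "popping":
            return (1.0, 0.0) if t < 0.5 else (0.0, 1.0)
        start = 0.5 - fade_width / 2.0
        w_right = min(max((t - start) / fade_width, 0.0), 1.0)
        return (1.0 - w_right, w_right)

Shrinking fade_width moves the behavior continuously from a long cross-fade toward a pop, which is the trade-off the guidelines above address.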

This work is a collaboration with Roland W. Fleming (Justus-Liebig-University Giessen, Germany). The work was published in a special issue of the journal Computer Graphics Forum [20] and presented at the Eurographics Symposium on Rendering 2011.

Perception of Slanted, Textured Façades

Participants : Peter Vangorp, Adrien Bousseau, Gaurav Chaurasia, George Drettakis.

In large-scale urban visualizations, buildings are often geometrically represented by simple boxes textured with images of the façades. Any depth variations in the façade, such as balconies, are perceived to have distorted angles when the viewer is not at the capture camera position. The retinal hypothesis, which predicts that the perceived angles follow those in the viewer's retinal image, provides the most likely prediction of the magnitude of the perceived distortion. We conduct psychophysical experiments to measure the perceived distortion, thereby validating the retinal hypothesis, and to measure the threshold for detecting any distortion. The result is a prediction of the valid range of viewer motion for a given capture.
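The following top-down Python sketch (a simplification under assumed geometry with hypothetical names, not the experimental code) illustrates the kind of prediction involved: a balcony tip is painted onto the flat façade proxy through the capture camera, the angular discrepancy seen by a displaced viewer is computed, and viewer positions are kept only where that discrepancy stays below a detection threshold of the kind measured psychophysically:

    import math

    def painted_position(point, cam):
        # Project a scene point (x, z), with z the distance in front of the
        # facade plane z = 0, onto that plane through the capture camera.
        s = cam[1] / (cam[1] - point[1])
        return (cam[0] + s * (point[0] - cam[0]), 0.0)

    def angular_error(point, cam, viewer):
        # Angle at the viewer between the true 3D point and the position
        # where the texture paints it on the flat proxy.
        painted = painted_position(point, cam)
        a = math.atan2(point[1] - viewer[1], point[0] - viewer[0])
        b = math.atan2(painted[1] - viewer[1], painted[0] - viewer[0])
        err = abs(a - b)
        return min(err, 2.0 * math.pi - err)

    def valid_viewer_positions(point, cam, threshold, z_view, xs):
        # Lateral viewer positions (at distance z_view from the facade)
        # where the distortion stays below the detection threshold.
        return [x for x in xs
                if angular_error(point, cam, (x, z_view)) < threshold]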

This work is a collaboration with Martin S. Banks (UC Berkeley).

Binocular and Dynamic Cues to Glossiness

Participants : Peter Vangorp, George Drettakis.

Recent advances in display technology have made it possible to present high-quality stereoscopic imagery with accurate head tracking. This not only improves depth perception but also affects the perception of glossy materials. Previous work has shown that these conditions can increase perceived gloss by a small amount. We conduct psychophysical experiments to measure this effect quantitatively.

This work is a collaboration with Roland W. Fleming (Justus-Liebig-University Giessen, Germany) and Martin S. Banks (UC Berkeley).

Sound Particles

Participants : Charles Verron, George Drettakis.

This research deals with a sound synthesizer dedicated to particle-based environmental effects and intended for use in interactive virtual environments. The synthesis engine is based on five physically-inspired basic elements (sound atoms) that can be parameterized and stochastically distributed in time and space. Physically-inspired controls simultaneously drive graphics particle models (e.g., the distribution of particles and their average velocity) and sound parameters (e.g., the distribution of sound atoms and spectral modifications). These simultaneous audio/graphics controls result in a tight interaction between the two modalities that enhances the naturalness of the scene. The approach is currently illustrated with three environmental phenomena: fire, wind, and rain.
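As a minimal sketch of this shared-control idea (hypothetical names and constants; the actual synthesizer is more elaborate), a single rain intensity control can simultaneously set the graphics particle parameters and the stochastic distribution of impact atoms in time and space:

    import random

    class RainControl:
        def __init__(self, intensity):
            self.intensity = intensity  # shared high-level control in [0, 1]

        def graphics_params(self):
            # Drives the particle model: spawn rate and average velocity.
            return {"spawn_rate": 500.0 * self.intensity,
                    "mean_velocity": 5.0 + 4.0 * self.intensity}

        def sound_atoms(self, duration, seed=0):
            # Stochastically distributes impact atoms in time (Poisson-like
            # arrivals) and space; density and gain follow the same control,
            # so the audio stays coherent with the graphics.
            rng = random.Random(seed)
            atoms, t = [], 0.0
            rate = 200.0 * self.intensity + 1e-6  # atoms per second
            while True:
                t += rng.expovariate(rate)
                if t >= duration:
                    break
                atoms.append({"time": t,
                              "gain": 0.2 + 0.8 * self.intensity * rng.random(),
                              "position": (rng.uniform(-5.0, 5.0),
                                           rng.uniform(-5.0, 5.0))})
            return atoms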

Sound Synthesis for Crowds

Participants : Charles Verron, George Drettakis.

We are currently investigating new methods for synthesizing crowd sounds in virtual environments. Crowd sounds consist of many overlapping voices spatialized at different positions in the environment. A novel level of detail for crowd sounds is desirable: the cost of spatializing many individual voices can be replaced by an efficient babble-noise synthesis model. Furthermore, high-level controls should allow the crowd sound to be modified through semantic parameters related to the crowd's emotional state (e.g., calm or angry). This research should result in a new real-time crowd sound synthesizer with semantic controls for virtual environments.
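A minimal sketch of the intended level of detail (assumed interfaces and constants, since the synthesizer is still under development): only the closest voices are spatialized individually, the remaining ones are folded into a single babble-noise layer whose gain matches the energy they would have contributed, and a semantic parameter scales the overall excitation:

    def render_crowd(voices, listener, max_individual=8):
        # voices: list of (position, loudness); listener: position tuple.
        # Returns the voices to spatialize individually and the gain of the
        # babble-noise layer that replaces the rest.
        def dist(p, q):
            return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

        ranked = sorted(voices, key=lambda v: dist(v[0], listener))
        individual = ranked[:max_individual]   # full spatialization
        rest = ranked[max_individual:]         # folded into babble noise
        # Match the babble layer's energy to the voices it replaces.
        babble_gain = sum(loud ** 2 for _, loud in rest) ** 0.5
        return individual, babble_gain

    def semantic_excitation(emotion):
        # Hypothetical mapping from a semantic crowd state to a level that
        # could scale the density and pitch variance of the babble model.
        return {"calm": 0.4, "excited": 0.7, "angry": 1.0}.get(emotion, 0.5)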